As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, prediction accuracy decreases because of quantization noise, especially in extremely low-bit settings. Determining the appropriate quantization parameters (e.g., scaling factors and rounding of weights) is the primary challenge. Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization, but using this distance as the optimization metric considers only local information. We analyze the problem of minimizing local metrics and show that it does not yield optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples in PTQ. In this paper, we propose PD-Quant to address these problems. PD-Quant uses the difference between network predictions before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations in PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and RegNetX-600MF up to 40.92% with 2-bit weights and 2-bit activations. The code will be released at https://github.com/hustvl/PD-Quant.
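To contrast the two objectives, here is a minimal PyTorch sketch of a global prediction-difference metric versus the conventional local feature distance; the function names are illustrative and not taken from the PD-Quant implementation.

```python
import torch
import torch.nn.functional as F

def local_feature_loss(fp_feat: torch.Tensor, q_feat: torch.Tensor) -> torch.Tensor:
    # Conventional local metric: distance between one layer's features
    # before and after quantization.
    return F.mse_loss(q_feat, fp_feat)

def prediction_difference_loss(fp_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    # Global metric in the spirit of PD-Quant: KL divergence between the
    # full-precision and quantized network predictions, so quantization
    # parameters are tuned against the model's final output.
    fp_prob = F.softmax(fp_logits, dim=-1)
    q_log_prob = F.log_softmax(q_logits, dim=-1)
    return F.kl_div(q_log_prob, fp_prob, reduction="batchmean")
```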
As more and more artificial intelligence (AI) technologies move from the laboratory to real-world applications, the open-set and robustness challenges posed by real-world data have received increasing attention. Data augmentation is a widely used method for improving model performance, and recent works have also confirmed its positive effect on the robustness of AI models. However, most existing data augmentation methods are heuristic and lack exploration of their internal mechanisms. We apply explainable artificial intelligence (XAI) methods to explore the internal mechanisms of popular data augmentation methods, analyze the relationship between game interactions and some widely used robustness metrics, and propose a new proxy for model robustness in open-set environments. Based on this analysis of internal mechanisms, we develop a mask-based boosting method for data augmentation that comprehensively improves several robustness measures of AI models and beats state-of-the-art data augmentation approaches. Experiments show that our method can be widely applied to many popular data augmentation methods. Unlike adversarial training, our boosting method not only significantly improves model robustness but also improves test-set accuracy. Our code is available at \url{https://github.com/Anonymous_for_submission}.
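As a rough illustration of what a mask-based augmentation can look like (this is a hypothetical sketch, not a reproduction of the paper's boosting method), the snippet below zeroes a random subset of image patches:

```python
import torch

def random_patch_mask(img: torch.Tensor, patch: int = 16, drop_ratio: float = 0.25) -> torch.Tensor:
    # img: (C, H, W) with H and W divisible by `patch`.
    C, H, W = img.shape
    gh, gw = H // patch, W // patch
    keep = (torch.rand(gh, gw) >= drop_ratio).to(img.dtype)      # 1 = keep patch
    mask = keep.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
    return img * mask                                            # broadcast over channels
```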
Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the speaker's emotion presented in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks as it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges visual information to the corresponding speech prosody at three levels: lip, face, and scene. Specifically, we align lip movement with speech duration, and convey facial expression to speech energy and pitch via an attention mechanism based on valence and arousal representations inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted to a speech waveform by an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
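A minimal sketch of the face-to-prosody attention idea, assuming frame-level speech queries and facial (valence/arousal) features share one model dimension; the module and head names are hypothetical, not the paper's code:

```python
import torch
import torch.nn as nn

class FaceToProsody(nn.Module):
    # Speech-frame queries attend over facial features so that facial
    # expression modulates per-frame energy and pitch predictions.
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.energy_head = nn.Linear(d_model, 1)
        self.pitch_head = nn.Linear(d_model, 1)

    def forward(self, speech_frames: torch.Tensor, face_feats: torch.Tensor):
        # speech_frames: (B, T, d); face_feats: (B, N, d)
        ctx, _ = self.attn(speech_frames, face_feats, face_feats)
        return self.energy_head(ctx).squeeze(-1), self.pitch_head(ctx).squeeze(-1)
```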
Motion prediction is highly relevant to the perception of dynamic objects and static map elements in autonomous driving scenarios. In this work, we propose PIP, the first end-to-end Transformer-based framework that jointly and interactively performs online mapping, object detection, and motion prediction. PIP leverages map queries, agent queries, and mode queries to encode instance-wise information about map elements, agents, and motion intentions, respectively. Based on this unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without a human-annotated HD map or the agents' historical tracking trajectories as guidance, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information about the driving scene (a vectorized static map and dynamic objects with motion information), and contributes to downstream planning and control. Code and models will be released to facilitate further research.
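A toy sketch of the unified query representation, assuming learnable embeddings for map elements, agents, and motion modes, with a single cross-attention step standing in for the interaction scheme (all names and sizes are hypothetical):

```python
import torch
import torch.nn as nn

class UnifiedQueries(nn.Module):
    def __init__(self, n_map: int = 100, n_agent: int = 50, n_mode: int = 6, d: int = 256):
        super().__init__()
        self.map_q = nn.Embedding(n_map, d)      # map-element instances
        self.agent_q = nn.Embedding(n_agent, d)  # agent instances
        self.mode_q = nn.Embedding(n_mode, d)    # motion intentions per agent
        self.interact = nn.MultiheadAttention(d, 8, batch_first=True)

    def forward(self, batch_size: int):
        m = self.map_q.weight.expand(batch_size, -1, -1)
        a = self.agent_q.weight.expand(batch_size, -1, -1)
        # Agent queries attend to map queries: a stand-in for the
        # differentiable perception-prediction interaction.
        a, _ = self.interact(a, m, m)
        return m, a, self.mode_q.weight
```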
We present a simple yet effective end-to-end Video-language Pre-training (VidLP) framework, Masked Contrastive Video-language Pretraining (MAC), for video-text retrieval tasks. MAC aims to reduce the spatial and temporal redundancy of the video representation in the VidLP model via a mask sampling mechanism, improving pre-training efficiency. Compared with conventional temporal sparse sampling, we propose to randomly mask a high ratio of spatial regions and feed only the visible regions into the encoder as sparse spatial sampling. Similarly, we adopt the mask sampling technique for text inputs for consistency. Instead of blindly applying the mask-then-prediction paradigm from MAE, we propose a mask-then-alignment paradigm for efficient video-text alignment. The motivation is that video-text retrieval tasks rely on high-level alignment rather than low-level reconstruction, and multimodal alignment with masked modeling encourages the model to learn a robust and general multimodal representation from incomplete and unstable inputs. Coupling these designs enables efficient end-to-end pre-training: FLOPs are reduced by 60%, pre-training is accelerated by 3x, and performance improves. MAC achieves state-of-the-art results on various video-text retrieval datasets, including MSR-VTT, DiDeMo, and ActivityNet. Our approach is also flexible with respect to input modalities: with minimal modifications, we achieve competitive results on image-text retrieval tasks.
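A minimal sketch of MAE-style sparse spatial sampling, keeping a random subset of patch tokens per sample before the encoder (the names and keep ratio are illustrative assumptions):

```python
import torch

def sparse_spatial_sampling(tokens: torch.Tensor, keep_ratio: float = 0.3):
    # tokens: (B, N, D) patch tokens; only the kept tokens reach the encoder.
    B, N, _ = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    noise = torch.rand(B, N, device=tokens.device)       # random per-token scores
    keep_idx = noise.argsort(dim=1)[:, :n_keep]          # indices of visible tokens
    batch_idx = torch.arange(B, device=tokens.device).unsqueeze(-1)
    return tokens[batch_idx, keep_idx], keep_idx         # (B, n_keep, D), (B, n_keep)
```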
We present MapTR, a structured end-to-end framework for efficient online vectorized HD map construction. We propose a unified permutation-based modeling approach, i.e., modeling each map element as a point set with a group of equivalent permutations, which avoids the definition ambiguity of map elements and eases learning. We adopt a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical matching for map element learning. MapTR achieves the best performance and efficiency among existing vectorized map construction approaches on the nuScenes dataset. In particular, MapTR-nano runs at real-time inference speed (25.1 FPS) on an RTX 3090, 8x faster than the existing state-of-the-art camera-based method, while achieving 3.3 higher mAP. MapTR-tiny significantly outperforms the existing state-of-the-art multi-modality method by 13.5 mAP while running at faster speed. Qualitative results show that MapTR maintains stable and robust map construction quality in complex and various driving scenes. Abundant demos are available at \url{https://github.com/hustvl/MapTR} to demonstrate its effectiveness in real-world scenarios. MapTR is of great application value in autonomous driving. Code will be released to facilitate further research and application.
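For the permutation-based modeling, the set of point orderings that describe the same map element can be enumerated explicitly; a minimal sketch (illustrative, not the MapTR implementation):

```python
import torch

def equivalent_permutations(points: torch.Tensor, closed: bool) -> torch.Tensor:
    # points: (N, 2) ordered points of one map element.
    # An open polyline has 2 equivalent orderings (forward / reverse);
    # a closed polygon has 2 * N (every start point, both directions).
    perms = []
    if closed:
        for s in range(points.shape[0]):
            rolled = torch.roll(points, shifts=-s, dims=0)
            perms.append(rolled)
            perms.append(torch.flip(rolled, dims=[0]))
    else:
        perms.append(points)
        perms.append(torch.flip(points, dims=[0]))
    return torch.stack(perms)  # (num_permutations, N, 2)
```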
Multi-object tracking in videos requires solving the fundamental problem of one-to-one assignment between objects in adjacent frames. Most methods address the problem by first discarding impossible pairs whose distances are larger than a threshold, and then linking objects with the Hungarian algorithm to minimize the overall distance. However, we find that the distribution of distances computed from Re-ID features may vary significantly across videos, so there is no single optimal threshold that allows us to safely discard impossible pairs. To address this problem, we propose an effective approach to compute a marginal probability for each pair of objects in real time. The marginal probability can be regarded as a normalized distance that is significantly more stable than the original feature distance. As a result, we can use a single threshold for all videos. The approach is general and can be applied to existing trackers to obtain an improvement of about one point in terms of the IDF1 metric. It achieves competitive results on the MOT17 and MOT20 benchmarks. In addition, the computed probabilities are more interpretable, which facilitates subsequent post-processing operations.
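One common way to turn a raw distance matrix into normalized, video-independent matching scores is Sinkhorn normalization; the sketch below is a stand-in under that assumption, not the paper's exact real-time marginal-probability computation:

```python
import torch

def marginal_match_probability(dist: torch.Tensor, tau: float = 0.1, n_iters: int = 20) -> torch.Tensor:
    # dist: (M, N) pairwise Re-ID feature distances between detections
    # in adjacent frames. Returns an (approximately doubly stochastic)
    # matrix of matching probabilities, comparable across videos.
    log_p = -dist / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # row normalization
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # column normalization
    return log_p.exp()
```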
In this work, we propose PolarBEV for vision-based uneven BEV representation learning. To adapt to the foreshortening effect of camera imaging, we rasterize the BEV space both angularly and radially, and introduce polar embedding decomposition to model the associations among polar grids. The polar grids are rearranged into an array-like regular representation for efficient processing. Furthermore, to determine the 2D-to-3D correspondence, we iteratively update the BEV surface based on a hypothetical plane and adopt height-based feature transformation. PolarBEV maintains real-time inference speed on a single 2080Ti GPU, and outperforms other methods on both BEV semantic segmentation and BEV instance segmentation. Thorough ablations are presented to validate the design. The code will be released at \url{https://github.com/SuperZ-Liu/PolarBEV}.
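A minimal sketch of angular-radial rasterization: mapping Cartesian BEV coordinates to polar grid indices (the bin counts and range below are hypothetical, not the paper's settings):

```python
import math
import torch

def polar_bin_index(xy: torch.Tensor, n_azimuth: int = 128, n_radius: int = 64,
                    max_radius: float = 51.2):
    # xy: (..., 2) BEV coordinates in meters, with the ego vehicle at the origin.
    theta = torch.atan2(xy[..., 1], xy[..., 0])                    # angle in [-pi, pi)
    r = torch.linalg.norm(xy, dim=-1).clamp(max=max_radius - 1e-6)
    a_idx = ((theta + math.pi) / (2 * math.pi) * n_azimuth).long().clamp(0, n_azimuth - 1)
    r_idx = (r / max_radius * n_radius).long()
    return a_idx, r_idx
```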
With the rapid development of explainable artificial intelligence (XAI), a series of past works have raised concerns about the out-of-distribution (OOD) problem in perturbation-based post-hoc XAI models and about explanations being socially misaligned. We explore the limitations of post-hoc explanation methods that use approximators to mimic the behavior of black-box models. We then propose eXplanation-based Counterfactual Retraining (XCR), which extracts features rapidly. XCR applies the explanations generated by the XAI model as counterfactual inputs to retrain the black-box model, addressing the OOD and social misalignment problems. Evaluation on popular image datasets shows that XCR can improve model performance while retaining only 12.5% of the most important features, without changing the black-box model structure. Furthermore, evaluation on corruption-dataset benchmarks shows that XCR is very helpful for improving model robustness and positively affects calibration under the OOD problem. Even without calibration on a validation set, as some OOD calibration methods require, XCR outperforms existing methods on corrupted-data metrics. If calibration on a validation set is applied, our method also beats current OOD calibration methods on OOD calibration metrics.
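A minimal sketch of constructing a counterfactual retraining input from an XAI saliency map, keeping only the top 12.5% most important pixels (the function names are illustrative assumptions):

```python
import torch

def counterfactual_input(img: torch.Tensor, saliency: torch.Tensor,
                         keep_ratio: float = 0.125) -> torch.Tensor:
    # img: (C, H, W); saliency: (H, W) importance scores from an XAI method.
    k = max(1, int(saliency.numel() * keep_ratio))
    thresh = saliency.flatten().topk(k).values.min()
    mask = (saliency >= thresh).to(img.dtype)   # 1 = keep pixel
    return img * mask                           # broadcast over channels
```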
3D detection based on surround-view camera systems is a critical technique in autonomous driving. In this work, we propose Polar Parametrization for 3D detection, which reformulates position parametrization, velocity decomposition, perception range, label assignment, and loss function in the polar coordinate system. Polar Parametrization establishes an explicit association between image patterns and prediction targets, exploiting the view symmetry of surround-view cameras as an inductive bias to ease optimization and boost performance. Based on Polar Parametrization, we propose a surround-view 3D detection transformer named PolarDETR. PolarDETR achieves a promising performance-speed trade-off on different backbone configurations. Besides, PolarDETR ranks 1st on the nuScenes benchmark leaderboard in terms of both 3D detection and 3D tracking at the time of submission (March 4th, 2022). Code will be released at \url{https://github.com/hustvl/PolarDETR}.
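A minimal sketch of the re-parametrization: converting a box center to (azimuth, radius) and decomposing velocity into radial and tangential components (illustrative, not the PolarDETR code):

```python
import torch

def polar_parametrize(center_xy: torch.Tensor, velocity_xy: torch.Tensor):
    # center_xy, velocity_xy: (..., 2) in the ego frame.
    azimuth = torch.atan2(center_xy[..., 1], center_xy[..., 0])
    radius = torch.linalg.norm(center_xy, dim=-1)
    cos_a, sin_a = torch.cos(azimuth), torch.sin(azimuth)
    v_radial = velocity_xy[..., 0] * cos_a + velocity_xy[..., 1] * sin_a
    v_tangential = -velocity_xy[..., 0] * sin_a + velocity_xy[..., 1] * cos_a
    return azimuth, radius, v_radial, v_tangential
```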